Results 1 - 20 of 4,888
1.
Methods Mol Biol; 2787: 3-38, 2024.
Article in English | MEDLINE | ID: mdl-38656479

ABSTRACT

In this chapter, we explore the application of high-throughput crop phenotyping facilities for phenotype data acquisition and the extraction of significant information from the collected data through image processing and data mining methods. We also introduce the construction and outlook of crop phenotype databases and emphasize the need for global cooperation and data sharing. High-throughput crop phenotyping markedly improves accuracy and efficiency compared with traditional measurements, helping to overcome bottlenecks in the phenotyping field and to advance crop genetics.


Subjects
Crops, Agricultural; Data Mining; Image Processing, Computer-Assisted; Phenotype; Crops, Agricultural/genetics; Crops, Agricultural/growth & development; Data Mining/methods; Image Processing, Computer-Assisted/methods; Data Management/methods; High-Throughput Screening Assays/methods
2.
Methods Mol Biol; 2787: 315-332, 2024.
Article in English | MEDLINE | ID: mdl-38656500

ABSTRACT

Structural insights into macromolecular and protein complexes provide key clues about the molecular basis of function. Cryogenic electron microscopy (cryo-EM) has emerged as a powerful structural biology method for studying protein and macromolecular structures at high resolution in both native and near-native states. Despite its ability to yield detailed structural insights into the processes underlying protein function, plant biologists have been hesitant to apply the method to biomolecular interaction studies. This is evident from the relatively few depositions of plant-derived proteins and protein complexes in the electron microscopy databank. Although progress has been slow, cryo-EM has contributed significantly to our understanding of the molecular processes underlying photosynthesis and energy transfer in plants, as well as of the viruses that infect plants. This chapter introduces sample preparation for both negative-staining electron microscopy (NSEM) and cryo-EM of plant proteins and macromolecular complexes, and data analysis using single particle analysis, for beginners.


Subjects
Cryoelectron Microscopy; Macromolecular Substances; Cryoelectron Microscopy/methods; Macromolecular Substances/ultrastructure; Macromolecular Substances/chemistry; Macromolecular Substances/metabolism; Plant Proteins/metabolism; Plant Proteins/ultrastructure; Plant Proteins/chemistry; Negative Staining/methods
3.
Sci Rep; 14(1): 9554, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664440

ABSTRACT

While deep learning has become the go-to method for image denoising owing to its impressive noise removal capabilities, excessive network depth plagues many existing approaches and imposes a significant computational burden. To address this bottleneck, we propose a novel lightweight network that fuses progressive residual connections with attention mechanisms and handles both Gaussian and real-world image noise effectively. The network begins with dense blocks (DBs) that learn the noise distribution, which substantially reduces the parameter count while comprehensively extracting local image features. It then adopts a progressive strategy in which shallow convolutional features are incrementally integrated with deeper ones, forming a residual fusion framework that extracts global features relevant to the noise characteristics. Finally, the output feature maps of each DB and the robust edge features from the convolutional attention feature fusion module (CAFFM) are combined and passed to the reconstruction layer, which produces the final denoised image. Experiments with Gaussian white noise and natural noise at levels 15-50 show a marked performance gain: the network achieves higher average Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Feature Similarity Index for Color images (FSIMc) than more than 20 existing methods across six varied datasets. The network also preserves essential image features such as edges and textures, marking a notable advance in image processing. The proposed model is applicable to a range of image-centric domains, including image processing, computer vision, video analysis, and pattern recognition.
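A minimal sketch of PSNR, the headline metric in this abstract, computed with NumPy for an 8-bit image pair (illustrative only, not the authors' code):

```python
import numpy as np

def psnr(clean: np.ndarray, denoised: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB; higher means closer to the clean image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example at noise level (sigma) 25 on a synthetic image.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255)
print(f"PSNR of the noisy input: {psnr(clean, noisy):.2f} dB")
```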

4.
Biomed Mater; 19(3), 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38626778

ABSTRACT

Accurate segmentation of the coronary artery tree and personalized 3D printing from medical images are essential for coronary artery disease (CAD) diagnosis and treatment. The current 3D printing literature relies solely on generic models created with different software or on 3D coronary artery models manually segmented from medical images. Moreover, few studies have examined the bioprintability of 3D models generated by artificial intelligence (AI) segmentation for complex and branched structures. In this study, deep learning algorithms with transfer learning were employed to segment the coronary artery tree accurately from medical images and generate printable segmentations. We propose a combination of deep learning and 3D printing that accurately segments and prints complex vascular patterns in coronary arteries. We then 3D-printed the AI-generated coronary artery segmentation to fabricate a bifurcated hollow vascular structure. Our results indicate improved segmentation performance with the aid of transfer learning, with a Dice overlap score of 0.86 on a test set of 10 coronary computed tomography angiography images. Bifurcated regions from the 3D models were then printed into a Pluronic F-127 support bath using an alginate + glucomannan hydrogel. We successfully fabricated the bifurcated coronary artery structures with high length and wall-thickness accuracy; however, the outer diameters of the vessels and the length of the bifurcation point differ from the 3D models. The extrusion of unnecessary material, observed primarily when the nozzle moves from the left to the right vessel during 3D printing, can be mitigated by adjusting the nozzle speed. Shape accuracy could be further improved by designing a multi-axis printhead that can change the printing angle in three dimensions. This study thus demonstrates the potential of AI-segmented 3D models in the 3D printing of coronary artery structures, which, with further improvement, could be used to fabricate patient-specific vascular implants.
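A hedged sketch of the Dice overlap score reported above (0.86), for binary segmentation masks, implemented with NumPy:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|); 1.0 is a perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom
```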


Subjects
Algorithms; Artificial Intelligence; Coronary Vessels; Printing, Three-Dimensional; Humans; Coronary Vessels/diagnostic imaging; Deep Learning; Imaging, Three-Dimensional/methods; Coronary Angiography/methods; Alginates/chemistry; Computed Tomography Angiography/methods; Software
5.
Sci Rep; 14(1): 9439, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658603

ABSTRACT

This paper optimizes the 2D Wadell roundness calculation of particles using digital image processing. An algorithm for grouping corner key points is proposed to distinguish each independent corner, and a cyclic midpoint filtering method is introduced to mitigate corner aliasing. The relationships between the number of corner pixels (m), the central angle of the corner (α), and the dealiasing-degree parameter (n) are established. The Krumbein chart and a sandstone thin-section image are used as examples for computing 2D Wadell roundness. A set of regular shapes is calculated and the method's error is discussed: when α ≥ 30°, the maximum error of Wadell roundness for regular shapes is 5.21%; when 12° ≤ α < 30°, the maximum error increases. By applying interpolation to raise the number of corner pixels to the minimum value (m0) given by the α-m0 relation obtained in this study, the corner-circle error can be kept within the allowable range. The results indicate that as m increases, the optimal interval for n widens, and that a larger α reduces the dependence on m. These results can be applied to dealiasing and shape analysis of complex closed contours.
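For reference, 2D Wadell roundness is conventionally defined as the mean radius of the fitted corner circles divided by the radius of the maximum inscribed circle. A minimal sketch of that definition (corner detection and dealiasing, the paper's actual contribution, are assumed to have produced the radii already):

```python
def wadell_roundness(corner_radii: list[float], r_inscribed: float) -> float:
    """Wadell 2D roundness: mean corner-circle radius / max inscribed radius."""
    return (sum(corner_radii) / len(corner_radii)) / r_inscribed

print(wadell_roundness([0.8, 1.1, 0.9, 1.2], r_inscribed=2.5))  # ~0.40
```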

6.
Sci Rep; 14(1): 9136, 2024 Apr 21.
Article in English | MEDLINE | ID: mdl-38644440

ABSTRACT

Edge detection is a vital application of image processing in fields such as object detection and the identification of lesion regions in medical images. The problem is harder for color images, because the information in the separate color layers must be combined to yield a single, unified edge boundary. In this paper, a simple and effective method for edge detection in color images is proposed using a combination of a support vector machine (SVM) and the social spider optimization (SSO) algorithm. The input color image is first converted to grayscale, and an initial estimate of the image edges is computed from it using an SVM with a Radial Basis Function (RBF) kernel whose hyperparameters are tuned by the SSO algorithm. After these initial edges are formed, they are compared with pairwise combinations of the color layers, and the SSO algorithm refines the edge localization so as to maximize compatibility with those pairwise combinations. This process yields prominent image edges and reduces the adverse effect of noise on the final result. The performance of the proposed method was evaluated on various color images and compared with similar previous strategies. The proposed method identifies image edges more accurately, achieving an average accuracy of 93.11% on the BSDS500 database, an increase of at least 0.74% over the other methods.
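An illustrative sketch of the tuning step only: an RBF-kernel SVM whose C/gamma are chosen by a search procedure. The paper uses SSO; a random search stands in for it here, since SSO is not available in scikit-learn, and the features and labels are stand-ins:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))             # stand-in pixel-neighborhood features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in edge / non-edge labels

best = (None, -np.inf)
for _ in range(20):                        # SSO would steer this sampling
    C, gamma = 10 ** rng.uniform(-2, 2), 10 ** rng.uniform(-3, 1)
    score = cross_val_score(SVC(kernel="rbf", C=C, gamma=gamma), X, y, cv=3).mean()
    if score > best[1]:
        best = ((C, gamma), score)
print(best)
```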

7.
Sichuan Da Xue Xue Bao Yi Xue Ban; 55(2): 447-454, 2024 Mar 20.
Article in Chinese | MEDLINE | ID: mdl-38645864

ABSTRACT

Objective: Fully automatic segmentation of glioma and its subregions is fundamental for computer-aided clinical diagnosis of tumors. In the segmentation of brain magnetic resonance imaging (MRI), convolutional neural networks with small kernels capture only local features and integrate global features poorly, which narrows the receptive field and limits segmentation accuracy. This study uses dilated convolution to address the inadequate global feature extraction of 3D-UNet. Methods: 1) Algorithm construction: A 3D-UNet model with three pathways for richer global contextual feature extraction, called 3DGE-UNet, is proposed. Using the publicly available Brain Tumor Segmentation Challenge (BraTS) 2019 dataset (335 patient cases), a global contextual feature extraction (GE) module was designed and integrated at the first, second, and third skip connections of the 3D-UNet network to fully extract global features at different scales from the images. The extracted global features were overlaid with the upsampled feature maps to expand the model's receptive field and deeply fuse features across scales, enabling end-to-end automatic brain tumor segmentation. 2) Algorithm validation: The image data came from the BraTS 2019 dataset, comprising preoperative MRI images of 335 patients in four modalities (T1, T1ce, T2, and FLAIR) together with tumor annotations made by physicians. The dataset was divided into training, validation, and test sets at an 8:1:1 ratio, with the physician-labelled tumor images as the gold standard. Segmentation of the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) was evaluated on the test set using the Dice coefficient (overall effectiveness), sensitivity (detection rate of lesion areas), and the 95% Hausdorff distance (accuracy of tumor boundaries). Performance was compared between the 3D-UNet model without the GE module and the 3DGE-UNet model with it, to internally validate the GE module, and against ResUNet, UNet++, nnUNet, and UNETR, whose convergence was also compared, to externally validate the 3DGE-UNet model. Results: 1) In internal validation, the 3DGE-UNet model achieved mean Dice values of 91.47%, 87.14%, and 83.35% for the WT, TC, and ET regions of the test set, respectively, the best comprehensive scores, exceeding the corresponding values of the traditional 3D-UNet model (89.79%, 85.13%, and 80.90%) and indicating a significant improvement in segmentation accuracy across all three regions (P<0.05). The 3DGE-UNet model also showed higher sensitivity for ET than 3D-UNet (86.46% vs. 80.77%, P<0.05), identifying and capturing positive areas more comprehensively and thereby reducing the likelihood of missed diagnoses. It likewise excelled at segmenting the edges of WT, with a mean 95% Hausdorff distance superior to that of 3D-UNet (8.17 mm vs. 13.61 mm, P<0.05), while its performance for TC (8.73 mm vs. 7.47 mm) and ET (6.21 mm vs. 5.45 mm) was similar to that of 3D-UNet. 2) In external validation, the other four algorithms outperformed the 3DGE-UNet model only in mean Dice for TC (87.25%), mean sensitivity for WT (94.59%), mean sensitivity for TC (86.98%), and mean 95% Hausdorff distance for ET (5.37 mm), and none of these differences was statistically significant (P>0.05). The 3DGE-UNet model converged rapidly during training, outpacing the other external models. Conclusion: The 3DGE-UNet model can effectively extract and fuse feature information at different scales, improving the accuracy of brain tumor segmentation.
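A minimal sketch (assumed, not the paper's code) of two of the reported metrics, sensitivity and the 95% Hausdorff distance, for binary masks and their boundary-point coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

def sensitivity(pred: np.ndarray, truth: np.ndarray) -> float:
    """True-positive rate between boolean segmentation masks."""
    tp = np.logical_and(pred, truth).sum()
    return tp / truth.sum()

def hd95(pred_pts: np.ndarray, truth_pts: np.ndarray) -> float:
    """95th-percentile symmetric surface distance between boundary point sets."""
    d1, _ = cKDTree(truth_pts).query(pred_pts)  # pred -> nearest truth point
    d2, _ = cKDTree(pred_pts).query(truth_pts)  # truth -> nearest pred point
    return float(np.percentile(np.hstack([d1, d2]), 95))
```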


Subjects
Algorithms; Brain Neoplasms; Glioma; Magnetic Resonance Imaging; Neural Networks, Computer; Glioma/diagnostic imaging; Humans; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Imaging, Three-Dimensional/methods
8.
J Imaging; 10(4), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38667984

ABSTRACT

Imaging from optical coherence tomography (OCT) is widely used for detecting retinal diseases and localizing intra-retinal boundaries, but it is degraded by speckle noise. Deep learning models, which can be considered end-to-end frameworks, can aid with denoising and allow clinicians to diagnose retinal diseases more clearly. We selected denoising studies that applied deep learning models to retinal OCT imagery. Each study was quality-assessed through image quality metrics (including the peak signal-to-noise ratio, PSNR; contrast-to-noise ratio, CNR; and structural similarity index metric, SSIM). Meta-analysis could not be performed due to heterogeneity in the studies' methods and performance measurements. Multiple databases (including Medline via PubMed, Google Scholar, Scopus, and Embase) and a repository (arXiv) were screened for publications published after 2010, with no language restriction. Of the 95 potential studies identified, 54 were excluded after full-text assessment, either because deep learning (DL) was not utilized or because the dataset and results were not effectively explained, leaving 41 studies for thorough evaluation. The OCT images in this review comprise public retinal image datasets used purposefully for denoising OCT images (n = 37) and Optic Nerve Head (ONH) images (n = 4). A wide range of image quality metrics was used, with PSNR and SNR values ranging between 8 and 156 dB. A minority of studies (n = 8) showed a low risk of bias in all domains. Studies using ONH images produced PSNR or SNR values of 8.1 to 25.7 dB, and those using public retinal datasets 26.4 to 158.6 dB. Further analysis of the denoising models was not possible because reporting discrepancies prevented useful pooling. An increasing number of studies have investigated denoising retinal OCT images using deep learning, with a range of architectures being implemented. The reported improvements in image quality metrics seem promising, while study and reporting quality are currently low.
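A sketch of the contrast-to-noise ratio (CNR) named among the review's metrics, under one common definition (conventions vary between the reviewed papers):

```python
import numpy as np

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio between a region of interest and background."""
    return abs(roi.mean() - background.mean()) / np.sqrt(roi.var() + background.var())
```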

9.
J Imaging; 10(4), 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38667987

ABSTRACT

Spatial aspects of visual performance are usually evaluated with visual acuity charts and contrast sensitivity (CS) tests, the latter generated by progressively reducing the contrast of the visual charts. However, the quality of retinal images is affected by both ocular aberrations and scattering effects, and neither factor is incorporated as a parameter in clinical visual tests. We propose a new computational methodology to generate visual acuity charts affected by ocular scattering effects. Glare effects on the visual tests are generated by combining an ocular straylight meter methodology with the Commission Internationale de l'Eclairage (CIE) general disability glare formula. A new function for retinal contrast assessment is proposed, the subjective straylight function (SSF), which gives the maximum tolerance to perceived straylight in an observed visual acuity test; the subjective straylight index (SSI) is then defined as the area under the SSF curve. Results report the normal SSI values in a population of 30 young healthy subjects (19 ± 1 years old), with a normal distribution peaking at SSI = 0.46. The SSI was also evaluated as a function of both spatial and temporal aspects of vision. Ocular wavefront measures revealed a statistical correlation of the SSI with the defocus and trefoil terms. In addition, the time of recovery (TR) after induced total disability glare was related to the SSI; in particular, the higher the TR, the greater the SSI value for high- and mid-contrast levels of the visual test, while no relationship was found for low-contrast visual targets. In conclusion, a new computational method for retinal contrast assessment as a function of ocular straylight is proposed as a complementary subjective test of visual function performance.
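A hedged sketch of the veiling-luminance idea behind disability glare. The paper uses the CIE general disability glare formula; shown here is the simpler classical Stiles-Holladay approximation that it generalizes, L_veil = 10 · E / θ²:

```python
def veiling_luminance(e_glare_lux: float, theta_deg: float) -> float:
    """Stiles-Holladay veiling luminance (cd/m^2); valid for theta of roughly 1-30 degrees."""
    return 10.0 * e_glare_lux / theta_deg ** 2

print(veiling_luminance(100.0, 5.0))  # 40.0 cd/m^2
```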

10.
J Imaging; 10(4), 2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38667985

ABSTRACT

Testing an intricate, advanced software architecture is challenging in the absence of a test oracle. Metamorphic testing is a popular technique for alleviating the oracle problem, and its effectiveness depends on the metamorphic relations (MRs), which represent essential properties of the system under test and are evaluated by their fault detection rates. Existing techniques for evaluating MRs are not comprehensive, as very few mutation operators are used to generate very few mutants. In this research, we propose six new MRs for the dilation and erosion operations and determine their fault detection rates using mutation testing. We used eight applicable mutation operators and determined their effectiveness; by using all applicable operators, we ensured that every possible mutant was generated, so that the faults in the system under test are fully represented. Evaluation of four MRs for edge detection shows an improvement in all of them, especially MR1 and MR4, whose fault detection rates of 76.54% and 69.13% are 32% and 24% higher, respectively, than the existing technique; the fault detection rates of MR2 and MR3 also improve by 1%. Similarly, for dilation and erosion, four of the eight MRs achieve higher fault detection rates than the existing technique: MR1 improves by 39%, MR4 by 0.5%, MR6 by 17%, and MR8 by 29%. We also compared our proposed MRs with the existing MRs for dilation and erosion, and the results show that the proposed MRs complement the existing ones effectively, finding faults that the existing MRs do not identify.
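An illustrative metamorphic relation for dilation (not necessarily one of the paper's six): with a flip-symmetric structuring element, dilating a flipped image equals flipping the dilated image, so no oracle output is needed to check correctness:

```python
import numpy as np
from scipy.ndimage import grey_dilation

img = np.random.default_rng(2).integers(0, 256, (32, 32))
se_shape = (3, 3)  # square SE, symmetric under a left-right flip

lhs = grey_dilation(np.fliplr(img), size=se_shape)
rhs = np.fliplr(grey_dilation(img, size=se_shape))
assert np.array_equal(lhs, rhs)  # an assertion failure would signal a fault
```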

11.
J Imaging; 10(4), 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38667991

ABSTRACT

The continuous monitoring of civil infrastructures is crucial for ensuring public safety and extending the lifespan of structures. In recent years, image-processing-based technologies have emerged as powerful tools for the structural health monitoring (SHM) of civil infrastructures. This review provides a comprehensive overview of the advancements, applications, and challenges associated with image processing in the field of SHM. The discussion encompasses various imaging techniques such as satellite imagery, Light Detection and Ranging (LiDAR), optical cameras, and other non-destructive testing methods. Key topics include the use of image processing for damage detection, crack identification, deformation monitoring, and overall structural assessment. This review explores the integration of artificial intelligence and machine learning techniques with image processing for enhanced automation and accuracy in SHM. By consolidating the current state of image-processing-based technology for SHM, this review aims to show the full potential of image-based approaches for researchers, engineers, and professionals involved in civil engineering, SHM, image processing, and related fields.

12.
Tomography; 10(4): 459-470, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38668393

ABSTRACT

BACKGROUND: Left atrial (LA) assessment is an important marker of adverse cardiovascular outcomes. Cardiovascular magnetic resonance (CMR) accurately quantifies LA volume and function based on biplane long-axis imaging. We aimed to validate single-plane-derived LA indices against the biplane method to simplify the post-processing of cine CMR. METHODS: In this study, 100 patients from Leeds Teaching Hospitals served as the derivation cohort. Bias correction for the single-plane method was applied and subsequently validated in 79 subjects. RESULTS: There were significant differences between the biplane and single-plane mean LA maximum and minimum volumes and LA ejection fraction (EF) (all p < 0.01). After correcting for biases in the validation cohort, significant correlations were observed in all LA indices (r = 0.89 to 0.98). The area under the curve (AUC) for the single-plane method to predict biplane cutoffs was 0.97 for LA maximum volume ≥ 112 mL, 0.99 for LA minimum volume ≥ 44 mL, 1 for LA stroke volume (SV) ≤ 21 mL, and 1 for LA EF ≤ 46% (all p < 0.001). CONCLUSIONS: LA volumetric and functional assessment by the single-plane method carries a systematic bias relative to the biplane method; after bias correction, single-plane LA volume and function are comparable to the biplane method.
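A sketch of the standard biplane area-length formula that underlies biplane LA volumetry (assumed here; the study's exact post-processing may differ): V = (8 / 3π) · A_2ch · A_4ch / L, with L the long-axis length.

```python
import math

def la_volume_biplane(a_2ch_cm2: float, a_4ch_cm2: float, length_cm: float) -> float:
    """Left atrial volume (mL) from two- and four-chamber areas (cm^2) and length (cm)."""
    return (8.0 / (3.0 * math.pi)) * a_2ch_cm2 * a_4ch_cm2 / length_cm

print(f"{la_volume_biplane(20.0, 22.0, 5.0):.1f} mL")  # ~74.7 mL
```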


Subjects
Heart Atria; Magnetic Resonance Imaging, Cine; Humans; Magnetic Resonance Imaging, Cine/methods; Female; Male; Heart Atria/diagnostic imaging; Middle Aged; Aged; Stroke Volume/physiology; Reproducibility of Results; Adult; Image Interpretation, Computer-Assisted/methods
13.
J Biomed Opt; 29(Suppl 2): S22706, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38638450

ABSTRACT

Significance: Three-dimensional quantitative phase imaging (QPI) has rapidly emerged as a complementary tool to fluorescence imaging, as it provides an objective measure of cell morphology and dynamics, free of variability due to contrast agents. It has opened new directions of investigation by providing systematic and correlative analysis of various cellular parameters without the limitations of photobleaching and phototoxicity. While current QPI systems allow rapid acquisition of tomographic images, the pipeline for analyzing these raw three-dimensional (3D) tomograms is not well developed. We focus on a critical, yet often underappreciated, step of the analysis pipeline: 3D cell segmentation from the acquired tomograms. Aim: We report CellSNAP (Cell Segmentation via Novel Algorithm for Phase Imaging), an algorithm for the 3D segmentation of QPI images. Approach: The algorithm mimics the gemstone extraction process, starting with a coarse 3D extrusion from a two-dimensional (2D) segmented mask to outline the cell structure: a 2D image is generated and a segmentation algorithm identifies the boundary in the x-y plane; then, leveraging cell continuity across consecutive z-stacks, a refined 3D segmentation, akin to fine chiseling in gemstone carving, completes the process. Results: CellSNAP outstrips the current gold standard in speed, robustness, and ease of implementation, segmenting a cell in under 2 s on a single-core processor, and its implementation can easily be parallelized on a multi-core system for further speed-ups. For cases where segmentation is possible with the existing standard method, our algorithm shows an average difference of 5% for dry-mass and 8% for volume measurements. We also show that CellSNAP can handle challenging image datasets in which cells are clumped and marred by interferogram drifts, which pose major difficulties for all QPI-focused AI-based segmentation tools. Conclusion: Our proposed method is less memory-intensive and significantly faster than existing methods and can easily be run on a student laptop. Because the approach is rule-based, there is no need to collect large amounts of imaging data and manually annotate them for machine-learning-based model training. We envision that our work will lead to broader adoption of QPI imaging for high-throughput analysis, which has, in part, been stymied by the lack of suitable image segmentation tools.
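A rough sketch of the extrude-then-chisel idea described above (an assumed reconstruction, not the released CellSNAP code): segment a 2D projection, extrude the mask through z, then refine each slice using continuity:

```python
import numpy as np
from scipy import ndimage

def segment_3d(tomogram: np.ndarray, thresh: float) -> np.ndarray:
    """Coarse 2D mask -> 3D extrusion -> per-slice refinement (illustrative only)."""
    mask2d = tomogram.max(axis=0) > thresh            # coarse x-y boundary
    rough = np.broadcast_to(mask2d, tomogram.shape)   # 3D extrusion along z
    refined = np.empty_like(rough)
    for z in range(tomogram.shape[0]):                # per-slice "chiseling"
        slice_mask = np.logical_and(rough[z], tomogram[z] > thresh)
        refined[z] = ndimage.binary_closing(slice_mask)
    return refined
```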


Subjects
Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Algorithms; Optical Imaging
14.
Front Mol Biosci; 11: 1346242, 2024.
Article in English | MEDLINE | ID: mdl-38567100

ABSTRACT

Esophageal cancer (EC) remains a significant global health challenge, with increasing incidence and high mortality rates. Despite advances in treatment, improved diagnostic methods and a better understanding of disease progression are still needed. This study addresses the significant challenges in the automatic classification of EC, particularly in distinguishing its primary subtypes, adenocarcinoma and squamous cell carcinoma, from histopathology images. Traditional histopathological diagnosis, while the gold standard, is subject to subjectivity and human error and imposes a substantial burden on pathologists. In response, this study proposes a binary classification system for detecting EC subtypes that leverages deep learning techniques and tissue-level labels for enhanced accuracy. We utilized 59 high-resolution histopathological images from The Cancer Genome Atlas (TCGA) Esophageal Carcinoma dataset (TCGA-ESCA). The images were preprocessed, segmented into patches, and analyzed using a pre-trained ResNet101 model for feature extraction. For classification, we employed five machine learning classifiers, Support Vector Classifier (SVC), Logistic Regression (LR), Decision Tree (DT), AdaBoost (AD), and Random Forest (RF), together with a Feed-Forward Neural Network (FFNN). Evaluated by prediction accuracy on the test dataset, the classifiers achieved 0.88 (SVC and LR), 0.64 (DT and AD), 0.82 (RF), and 0.94 (FFNN). Notably, the FFNN achieved the highest Area Under the Curve (AUC) score of 0.92, indicating its superior performance, followed closely by SVC and LR with scores of 0.87. The proposed approach holds promise as a decision-support tool for pathologists, particularly in regions with limited resources and expertise; timely and precise detection of EC subtypes can substantially enhance the likelihood of successful treatment and ultimately reduce mortality in patients with this aggressive cancer.
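A hedged sketch of the described pipeline: a frozen ResNet101 as feature extractor on image patches, followed by a scikit-learn classifier. The patch tensor and labels below are placeholders, not the TCGA-ESCA data:

```python
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

resnet = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()  # drop the classification head -> 2048-d features
resnet.eval()

with torch.no_grad():
    patches = torch.randn(16, 3, 224, 224)  # stand-in for preprocessed patches
    features = resnet(patches).numpy()

labels = [0, 1] * 8                          # stand-in subtype labels
clf = LogisticRegression(max_iter=1000).fit(features, labels)
```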

15.
J Med Radiat Sci; 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38571377

ABSTRACT

INTRODUCTION: Breast cancer (BC), the most frequently diagnosed malignancy among women worldwide, presents a public health challenge and affects mortality rates. Breast-conserving therapy (BCT) is a common treatment, but the risk posed by residual disease necessitates radiotherapy. Digital mammography monitors treatment response by identifying post-operative and post-radiotherapy tissue alterations, but accurate assessment of mammographic density remains a challenge. This study used OpenBreast to measure percent density (PD), offering insights into changes in mammographic density before and after BCT with radiation therapy. METHODS: This retrospective analysis included 92 female patients with BC who underwent BCT, chemotherapy, and radiotherapy, excluding those who received hormonal therapy or bilateral BCT. PD measurements were extracted using OpenBreast, automated software that applies computational techniques to density analysis. Data were analysed at baseline, 3 months, and 15 months post-treatment using the standardised mean difference (SMD) with Cohen's d, chi-square, and paired-sample t-tests. The predictive power of PD changes for BC was measured using receiver operating characteristic (ROC) curve analysis. RESULTS: The mean age was 53.2 years. There were no significant differences in PD between the periods, and SMD analysis revealed no significant changes in PD before treatment compared with 3 and 15 months post-treatment. Although PD increased numerically after radiotherapy, ROC analysis showed optimal sensitivity for detecting changes in breast density at 15 months post-treatment. CONCLUSIONS: This study used an automated breast density segmentation tool to assess changes in mammographic density before and after BC treatment. No significant density differences were observed during the short-term follow-up, but the results suggest that quantitative density assessment could be valuable for long-term monitoring of treatment effects. The study underscores the need for larger, longitudinal studies to accurately measure and validate the effectiveness of quantitative methods in clinical BC management.
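A sketch of the effect-size computation named in the abstract, Cohen's d as a standardised mean difference with a pooled standard deviation (the study's exact paired variant may differ):

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d: difference in means over the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd
```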

16.
Sci Rep; 14(1): 7768, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565548

ABSTRACT

Repeatable measurements from image analytics are difficult to achieve, owing to the heterogeneity and complexity of cell samples, imprecision in microscope stage positioning, and variations in slide thickness. We present a method that defines and uses a reference focal plane to provide repeatable measurements with very high accuracy, relying on control beads as reference material and a convolutional neural network focused on the control bead images. We previously defined a reference effective focal plane (REFP) based on the image gradient of bead edges and three specific bead image features; this paper both generalizes and improves on that work. First, we refine the REFP definition by fitting a cubic spline to the relationship between the distance from a bead's center and pixel intensity, and by sharing information across experiments, exposures, and fields of view. Second, we remove our reliance on image features that behave differently from one instrument to another and instead apply a convolutional regression neural network (ResNet 18), trained on cropped bead images, that generalizes to multiple microscopes. Our ResNet 18 network predicts the location of the REFP from only a single inferenced image acquisition, which can be taken across a wide range of focal planes and exposure times. We describe the strategies and hyperparameter optimization used to achieve high prediction accuracy with the ResNet 18, with the uncertainty for every image tested falling within the microscope's repeatability measure of 7.5 µm from the desired focal plane. We demonstrate the generalizability of the methodology by applying it to two different optical systems and show that this level of accuracy can be achieved using only 6 beads per image.
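A minimal sketch of one ingredient described above: fitting a cubic spline to the relationship between distance from a bead's center and pixel intensity (synthetic data below; the ResNet 18 regression step is not reproduced):

```python
import numpy as np
from scipy.interpolate import CubicSpline

radius = np.linspace(0, 10, 25)  # distance from bead center (pixels)
intensity = (np.exp(-(radius - 4) ** 2 / 4)
             + 0.02 * np.random.default_rng(3).normal(size=25))  # stand-in profile
profile = CubicSpline(radius, intensity)

print(profile(3.7))  # interpolated intensity at an arbitrary radius
```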

17.
J Surg Case Rep; 2024(4): rjae188, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38572284

ABSTRACT

The treatment of recurrent ovarian cancer has been based on systemic therapy, and the role of secondary cytoreductive surgery has been addressed recently in several trials. Imaging plays a key role in helping the surgical team decide which patients will have resectable disease and benefit from surgery, and the roles of staging laparoscopy and of several imaging and clinical scores have been extensively debated in the field. In other surgical fields, 3D imaging software and 3D-printed models have been reported to help surgeons better plan the surgical approach. To the best of our knowledge, we report the first case of a patient with recurrent ovarian cancer undergoing 3D modeling before secondary cytoreductive surgery. The 3D modeling was most valuable for evaluating the extent of disease in our patient, who underwent successful secondary cytoreductive surgery and is currently free of disease.

18.
Med Biol Eng Comput; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38649629

ABSTRACT

Diabetic retinopathy produces lesions (e.g., exudates, hemorrhages, and microaneurysms) that are minute to the naked eye. Determining the lesions at pixel level is challenging because an individual pixel does not reflect any semantic entity, and inspecting each pixel is computationally expensive because the number of pixels is high even at low resolution. In this work, we propose a hybrid image processing method, Simple Linear Iterative Clustering with Gaussian Filter (SLIC-G), to overcome these pixel-level constraints. SLIC-G consists of two stages: (1) simple linear iterative clustering superpixel segmentation and (2) a Gaussian smoothing operation. In this way, a large number of transformed datasets are generated and then used for model training. Finally, two performance evaluation metrics suitable for imbalanced diabetic retinopathy datasets were used to validate the effectiveness of SLIC-G. The results indicate that, compared with results in prior published works, SLIC-G performs better in image classification on class-imbalanced diabetic retinopathy datasets. This research reveals the importance of image processing and its influence on the performance of deep learning networks: SLIC-G enhances pre-trained network performance by eliminating local redundancy in an image, preserving local structures while avoiding over-segmented, noisy clusters. It closes a research gap by introducing superpixel segmentation and Gaussian smoothing as image processing steps in diabetic retinopathy-related tasks.
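A hedged sketch of the two SLIC-G stages with scikit-image: SLIC superpixel segmentation followed by Gaussian smoothing. The parameter values and sample image are guesses, not the paper's settings:

```python
from skimage import data, filters, segmentation, color

image = data.astronaut()  # stand-in RGB image (a fundus image in the paper)
labels = segmentation.slic(image, n_segments=400, compactness=10)   # stage 1
simplified = color.label2rgb(labels, image, kind="avg")             # mean color per superpixel
smoothed = filters.gaussian(simplified, sigma=1.0, channel_axis=-1) # stage 2
```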

19.
Adv Sci (Weinh); e2400844, 2024 Apr 13.
Article in English | MEDLINE | ID: mdl-38613834

ABSTRACT

Scaling in insect wings is a complex phenomenon that appears pivotal to maintaining wing functionality. In this study, the relationship between wing size and the size, location, and shape of wing cells in dragonflies and damselflies (Odonata) is investigated, to address how these factors are interconnected. To this end, WingGram, a recently developed computer-vision-based software tool, is used to extract the geometric features of wing cells from 389 dragonfly and damselfly wings belonging to 197 species and 16 families. We found that cell length does not depend on wing size: despite wide variation in wing length (8.42 to 56.5 mm) and cell length (0.1 to 8.5 mm), over 80% of the cells had a length of 0.5 to 1.5 mm, a range previously identified as the critical crack length of the membrane of locust wings. Isometric scaling was observed for the largest cells in each wing, whose size increased with wing size, and smaller cells tended to be more circular than larger cells. The results have implications for biomimetics, inspiring new materials and designs for artificial wings with potential applications in aerospace engineering and robotics.
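A sketch of the standard circularity measure behind the "more circular" comparison above (assumed definition): 4πA / P², which equals 1.0 for a perfect circle and decreases for elongated cells:

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """4*pi*area / perimeter^2; 1.0 for a circle, smaller for elongated shapes."""
    return 4.0 * math.pi * area / perimeter ** 2

print(circularity(area=math.pi, perimeter=2 * math.pi))  # 1.0 (unit circle)
```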

20.
Sensors (Basel); 24(7), 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610257

ABSTRACT

Images captured in unfavorable environments may be affected by haze or fog, leading to blurred image details, low contrast, and loss of important information. Significant progress has recently been achieved in image dehazing, largely through the adoption of deep learning techniques; however, lacking modules specifically designed to learn the unique characteristics of haze, existing deep neural network-based methods handle hazy images poorly, and most networks focus primarily on learning clear-image information while disregarding the latent features of hazy images. To address these limitations, we propose the contrastive multiscale transformer for image dehazing (CMT-Net). The method uses a multiscale transformer to let the network learn global haze features at multiple scales, and introduces feature combination attention and a haze-aware module that assign more weight to haze-containing regions, enhancing the network's ability to handle varying haze concentrations. Finally, we design a multistage contrastive learning loss incorporating different positive and negative samples at each stage to guide the network toward restoring real, haze-free images. Experiments demonstrate that CMT-Net delivers exceptional performance on established datasets and superior visual results.
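A hedged sketch of a contrastive dehazing loss in the spirit described above (not CMT-Net's actual loss): pull the restored image toward the clear positive and away from the hazy negative in a feature space `phi`:

```python
import torch

def contrastive_loss(phi, restored, clear_pos, hazy_neg, eps=1e-6):
    """Ratio loss: small when restored is near the clear image, far from the hazy one."""
    d_pos = torch.nn.functional.l1_loss(phi(restored), phi(clear_pos))
    d_neg = torch.nn.functional.l1_loss(phi(restored), phi(hazy_neg))
    return d_pos / (d_neg + eps)
```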
